Detecting Answer Copying Using the Kappa Statistic


Similar Resources

Detecting Answer Copying on High-Stakes Tests

Cheating is a potentially serious problem on high-stakes tests, such as the Multistate Bar Exam (MBE), because it can result in under-qualified individuals receiving their Board’s certification to practice, thereby undermining the certification process. Because cheating on exams has such serious ramifications, it is important for test administrators to minimize cheating opportunities and detec...


Interrater reliability: the kappa statistic

The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While ther...
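To make the chance correction concrete, here is a minimal Python sketch of Cohen's kappa for two raters. The formula κ = (p_o − p_e) / (1 − p_e) is standard; the function name and sample labels below are illustrative, not taken from the article.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items the raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(cohen_kappa(a, b))  # 0.5: p_o = 0.75, p_e = 0.5
```

For the sample data, the raters agree on 6 of 8 items (p_o = 0.75), while their matching marginal frequencies imply p_e = 0.5, so κ = 0.5.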


The Kappa Statistic: A Second Look

In recent years, the kappa coefficient of agreement has become the de facto standard for evaluating intercoder agreement on tagging tasks. In this squib, we highlight issues that affect κ and that the community has largely neglected. First, we discuss the assumptions underlying different computations of the expected agreement component of κ. Second, we discuss how prevalence and bias affect the κ measure. I...
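The prevalence issue raised here can be made concrete with a small, hedged sketch: two hypothetical rater pairs with identical raw agreement but different marginal distributions. The numbers are invented for illustration, not taken from the squib.

```python
def kappa(p_o, p_e):
    # Chance-corrected agreement from observed and expected agreement.
    return (p_o - p_e) / (1 - p_e)

# Both hypothetical tables have 90% raw agreement (p_o = 0.9).
# Balanced marginals (50/50 split): p_e = 0.5**2 + 0.5**2 = 0.5
print(kappa(0.9, 0.5))    # 0.8
# Skewed marginals (95/5 split): p_e = 0.95**2 + 0.05**2 = 0.905
print(kappa(0.9, 0.905))  # about -0.05, despite identical raw agreement
```

With skewed marginals, expected agreement is already 0.905, so the same 90% raw agreement yields a slightly negative κ.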


Confidence intervals for the kappa statistic

The command kapci calculates 100(1 − α)% confidence intervals for the kappa statistic, using an analytical method in the case of dichotomous variables or the bootstrap for more complex situations. For instance, kapci allows estimating CIs for polychotomous variables using weighted kappa, or for cases in which there are more than 2 raters/replications.
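kapci itself is a Stata command, so the sketch below is only a generic Python illustration of the percentile-bootstrap idea it applies to the more complex cases; the function names and sample data are assumptions, not kapci's interface.

```python
import random
from collections import Counter

def kappa(a, b):
    # Cohen's kappa for two raters over the same items.
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)

def bootstrap_kappa_ci(a, b, alpha=0.05, reps=2000, seed=1):
    # Percentile bootstrap: resample items with replacement and take
    # the alpha/2 and 1 - alpha/2 quantiles of the kappa replicates.
    rng = random.Random(seed)
    n, stats = len(a), []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(kappa([a[i] for i in idx], [b[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * reps)], stats[int((1 - alpha / 2) * reps) - 1]

ratings_a = ["yes", "no"] * 30 + ["yes"] * 6
ratings_b = ["yes", "no"] * 30 + ["no"] * 6
print(bootstrap_kappa_ci(ratings_a, ratings_b))  # 95% percentile interval
```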


Understanding interobserver agreement: the kappa statistic.

Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) ...
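As a hedged illustration of agreement "simply by chance", the simulation below (labels and probabilities invented for the example) has two observers rate independently at random: raw agreement comes out near 0.68, yet kappa stays close to zero.

```python
import random
from collections import Counter

rng = random.Random(7)
labels = ["normal", "abnormal"]
# Two observers assigning labels independently at random (80/20 split).
obs1 = [rng.choices(labels, weights=[0.8, 0.2])[0] for _ in range(1000)]
obs2 = [rng.choices(labels, weights=[0.8, 0.2])[0] for _ in range(1000)]

n = len(obs1)
p_o = sum(x == y for x, y in zip(obs1, obs2)) / n
c1, c2 = Counter(obs1), Counter(obs2)
p_e = sum(c1[c] * c2[c] for c in c1) / n ** 2
print(p_o)                      # about 0.68: raw agreement looks decent
print((p_o - p_e) / (1 - p_e))  # kappa near 0: the agreement is chance
```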



Journal

Journal title: Applied Psychological Measurement

Year: 2006

ISSN: 0146-6216, 1552-3497

DOI: 10.1177/0146621606288891